Contrastive learning has been successfully used for retrieval of semantically aligned sentences, but it often requires large batch sizes or careful engineering to work well. In this paper, we instead propose a generative model for learning multilingual text embeddings which can be used to retrieve or score sentence pairs. Our model operates on parallel data in $N$ languages and, through an approximation we introduce, efficiently encourages source separation in this multilingual setting, separating semantic information that is shared between translations from stylistic or language-specific variation. We present a careful, large-scale comparison between contrastive and generation-based approaches for learning multilingual text embeddings -- a comparison that, to the best of our knowledge, has not been done despite the popularity of these approaches. We evaluate this method on a suite of tasks including semantic similarity, bitext mining, and cross-lingual question retrieval -- the last of which we introduce in this paper. Overall, our Variational Multilingual Source-Separation Transformer (VMSST) outperforms both strong contrastive and generative baselines on these tasks.
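As a point of reference for the contrastive baselines this abstract compares against, a minimal in-batch contrastive (InfoNCE-style) objective over parallel sentence pairs can be sketched as follows. This is a generic illustration, not the paper's VMSST model; the embedding dimension, batch size, and temperature are assumed for the example.

```python
import numpy as np

def info_nce_loss(src, tgt, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss over parallel sentence pairs:
    each source embedding should score highest against its own translation,
    with the other targets in the batch serving as negatives."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    logits = src @ tgt.T / temperature            # (batch, batch) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # NLL of the true (i, i) pairs

rng = np.random.default_rng(0)
src = rng.normal(size=(8, 16))                    # stand-ins for sentence embeddings
loss_aligned = info_nce_loss(src, src)            # perfectly aligned pairs: low loss
loss_random = info_nce_loss(src, rng.normal(size=(8, 16)))  # mismatched: high loss
```

Note how this objective's reliance on in-batch negatives is exactly what drives the large-batch requirement the abstract mentions.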
Fusion-in-Decoder (FiD) is a powerful retrieval-augmented language model that sets the state-of-the-art on many knowledge-intensive NLP tasks. However, FiD suffers from very expensive inference. We show that the majority of inference time results from memory bandwidth constraints in the decoder, and propose two simple changes to the FiD architecture to speed up inference by 7x. The faster decoder inference then allows for a much larger decoder. We denote FiD with the above modifications as FiDO, and show that it strongly improves performance over existing FiD models for a wide range of inference budgets. For example, FiDO-Large-XXL performs faster inference than FiD-Base and achieves better performance than FiD-Large.
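The abstract does not spell out the two architectural changes, but one standard way to relieve decoder memory-bandwidth pressure is multi-query attention, which shares a single key/value head across all query heads and so shrinks the cache that must be streamed from memory at every decoding step. A back-of-the-envelope sketch, with illustrative (assumed) model dimensions:

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_val=2):
    """Size of the decoder's key/value cache, which is re-read on every
    decoding step and so dominates memory traffic at inference time."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_val

# Multi-head attention: every attention head keeps its own keys and values.
multi_head = kv_cache_bytes(num_layers=24, num_kv_heads=16, head_dim=64, seq_len=4096)
# Multi-query attention: one shared key/value head for all 16 query heads.
multi_query = kv_cache_bytes(num_layers=24, num_kv_heads=1, head_dim=64, seq_len=4096)

reduction = multi_head // multi_query  # 16x less cache to stream per step
```

With the per-step memory traffic cut this way, decoder compute is no longer the bottleneck, which is what makes the larger decoder the abstract describes affordable.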
Large language models (LLMs) have shown impressive results across a variety of tasks while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios. We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in this setting. We propose and study Attributed QA as a key first step in the development of attributed LLMs. We develop a reproducible evaluation framework for the task, using human annotations as a gold standard and a correlated automatic metric that we show is suitable for development settings. We describe and benchmark a broad set of architectures for the task. Our contributions give some concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and give some hints as to how to address a third key question (How to build LLMs with attribution?).
Recently, there has been significant progress in teaching language models to perform step-by-step reasoning to solve complex numerical reasoning tasks. Chain-of-thought prompting (CoT) is by far the state-of-the-art method for these tasks. CoT uses language models to perform both reasoning and computation in the multi-step `thought' process. To disentangle computation from reasoning, we propose `Program of Thoughts' (PoT), which uses language models (mainly Codex) to express the reasoning process as a program. The computation is relegated to an external computer, which executes the generated programs to derive the answer. We evaluate PoT on five math word problem datasets (GSM, AQuA, SVAMP, TabMWP, MultiArith) and three financial-QA datasets (FinQA, ConvFinQA, TATQA) in both few-shot and zero-shot setups. Under both settings, PoT shows an average performance gain of around 12\% over CoT across all the evaluated datasets. By combining PoT with self-consistency decoding, we achieve SoTA performance on all the math problem datasets and near-SoTA performance on the financial datasets. All of our data and code are released on GitHub\footnote{\url{https://github.com/wenhuchen/Program-of-Thoughts}}.
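The PoT pipeline described above (the model emits a program; an external interpreter executes it to derive the answer) can be sketched minimally as follows. The hard-coded string stands in for an actual LLM generation, and the question is an invented example, not one from the evaluated datasets:

```python
# A stand-in for an LLM generation: in PoT the model expresses its
# reasoning as a program, binding the final result to a variable (`ans`),
# instead of computing the arithmetic in free-text "thoughts".
GENERATED_PROGRAM = """
# Question: Roger has 5 tennis balls. He buys 2 cans with 3 balls each.
# How many tennis balls does he have now?
initial_balls = 5
bought_balls = 2 * 3
ans = initial_balls + bought_balls
"""

def execute_program(program: str):
    """Relegate computation to the interpreter: run the generated program
    in a fresh namespace and read off the `ans` variable."""
    namespace = {}
    exec(program, namespace)
    return namespace["ans"]

answer = execute_program(GENERATED_PROGRAM)  # 11
```

Because only the reasoning is delegated to the model, arithmetic errors that plague free-text CoT traces are eliminated by construction: the interpreter never miscomputes `5 + 2 * 3`.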
Research on text-to-image generation has made significant progress in generating diverse and photo-realistic images, driven by diffusion and autoregressive models trained on large-scale image-text data. Although state-of-the-art models can produce high-quality images of common entities, they often struggle to generate images of uncommon entities, such as `Chortai (dog)' or `Picarones (food)'. To address this issue, we present the Retrieval-Augmented Text-to-Image Generator (Re-Imagen), a generative model that uses retrieved information to produce high-fidelity and faithful images, even for rare or unseen entities. Given a text prompt, Re-Imagen accesses an external multimodal knowledge base to retrieve relevant (image, text) pairs and uses them as references to generate the image. With this retrieval step, Re-Imagen is augmented with knowledge of the high-level semantics and low-level visual details of the mentioned entities, thus improving its accuracy in generating the entities' visual appearance. We train Re-Imagen on a constructed dataset containing (image, text, retrieval) triples to teach the model to ground on both the text prompt and the retrieval. Furthermore, we devise a new sampling strategy that interleaves classifier-free guidance for the text and retrieval conditions to balance text alignment and retrieval alignment. Re-Imagen achieves new SoTA FID results on two image-generation benchmarks, COCO (i.e., FID = 5.25) and WikiImages (i.e., FID = 5.82), without fine-tuning. To further evaluate the model's capabilities, we introduce EntityDrawBench, a new benchmark that evaluates image generation for diverse entities, from frequent to rare, across multiple visual domains. Human evaluation on EntityDrawBench shows that Re-Imagen is on par with the best prior models in photo-realism, while being significantly more faithful, especially on less frequent entities.
We introduce a new in-context learning paradigm to measure large language models' (LLMs') ability to learn novel words during inference. In particular, we rewrite Winograd-style co-reference resolution problems by replacing the key concept word with a synthetic but plausible word that the model must understand to complete the task. Solving this task requires the model to make use of the dictionary definition of the new word given in the prompt. This benchmark addresses word acquisition, one important aspect of the diachronic degradation known to afflict LLMs. Because LLMs are frozen in time at the moment they are trained, they normally fail to reflect the way language changes over time. We show that the accuracy of LLMs drops radically on our benchmark compared to the original Winograd tasks, thus identifying a limitation of current models and providing a benchmark for measuring future improvements in LLMs' ability to do in-context learning.
In this position paper, we propose a new approach to generating a knowledge base (KB) from text, based on question generation and entity linking. We argue that the proposed type of KB has many of the key advantages of a traditional symbolic KB: in particular, it consists of small modular components, which can be combined compositionally to answer complex queries, including relational queries and queries involving `multi-hop' inferences. However, unlike a traditional KB, this information store is well-aligned with common user information needs.
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit `breakthrough' behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.
In attempting to `explain' the predictions of machine learning models, researchers have proposed hundreds of techniques for attributing predictions to features deemed important. While these attributions are often claimed to hold the potential to improve human `understanding' of the models, surprisingly little work explicitly evaluates progress toward this aspiration. In this paper, we conduct a crowdsourcing study in which participants interact with deception-detection models that distinguish between genuine and fake hotel reviews. They are challenged to simulate the model on fresh reviews, and to edit reviews with the goal of lowering the probability of the originally predicted class. Successful manipulations lead to adversarial examples. During the training (but not the test) phase, input spans are highlighted to communicate salience. Through our evaluation, we observe that, for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase than a no-explanation control. For a BERT-based classifier, popular local explanations do not improve participants' ability to reduce the model's confidence over the no-explanation case. Remarkably, when the explanations for the BERT model are given by the (global) attributions of a linear model trained to imitate the BERT model, people can effectively manipulate the model.
While many methods purport to explain predictions by highlighting salient features, what aims these explanations serve and how they should be evaluated are often left unstated. In this work, we introduce a framework to quantify the value of explanations via the accuracy gains that they confer on a student model trained to simulate a teacher model. Crucially, the explanations are available to the student during training but are not available at test time. Compared to prior proposals, our approach is less easily gamed, enabling principled, automatic, model-agnostic evaluation of attributions. Using our framework, we compare numerous attribution methods for text classification and question answering, and observe quantitative differences that are consistent (to a moderate-to-high degree) across different student model architectures and learning strategies.